    Foreground separation methods for satellite observations of the cosmic microwave background

    A maximum entropy method (MEM) is presented for separating the emission due to different foreground components from simulated satellite observations of the cosmic microwave background radiation (CMBR). In particular, the method is applied to simulated observations by the proposed Planck Surveyor satellite. The simulations, performed by Bouchet and Gispert (1998), include emission from the CMBR, the kinetic and thermal Sunyaev-Zel'dovich (SZ) effects from galaxy clusters, as well as Galactic dust, free-free and synchrotron emission. We find that the MEM technique performs well and produces faithful reconstructions of the main input components. The method is also compared with traditional Wiener filtering and is shown to produce consistently better results, particularly in the recovery of the thermal SZ effect. Comment: 31 pages, 19 figures (bitmapped), accepted for publication in MNRAS.
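    For concreteness, the sketch below (not the authors' pipeline) shows the Wiener-filter baseline against which the MEM reconstruction is compared: for channel data d = A s + n, the Wiener estimate is s_hat = S A^T (A S A^T + N)^{-1} d, whereas MEM instead minimises chi^2(s) plus a regularising entropy term. The mixing matrix and covariances here are toy placeholders, not Planck instrument values.

```python
# Minimal sketch (not the authors' pipeline): a harmonic-space Wiener filter,
# the baseline the paper's MEM reconstruction is compared against.
# A, S and N below are toy placeholders, not Planck values.
import numpy as np

rng = np.random.default_rng(0)

n_freq, n_comp = 6, 3                       # e.g. 6 frequency channels, 3 sky components
A = rng.normal(size=(n_freq, n_comp))       # assumed component mixing matrix
S = np.diag([1.0, 0.5, 0.2])                # assumed signal (component) covariance
N = 0.1 * np.eye(n_freq)                    # assumed white-noise covariance

# Simulate one harmonic mode: d = A s + n
s_true = rng.multivariate_normal(np.zeros(n_comp), S)
d = A @ s_true + rng.multivariate_normal(np.zeros(n_freq), N)

# Wiener filter estimate: s_hat = S A^T (A S A^T + N)^{-1} d
W = S @ A.T @ np.linalg.inv(A @ S @ A.T + N)
s_hat = W @ d
print("true components:      ", s_true)
print("Wiener reconstruction:", s_hat)
```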

    Novel quantum initial conditions for inflation

    We present a novel approach for setting initial conditions on the mode functions of the Mukhanov-Sasaki equation. These conditions are motivated by minimisation of the renormalised stress-energy tensor, and are valid for setting a vacuum state even in a context where the spacetime is changing rapidly. Moreover, these alternative conditions are potentially observationally distinguishable. We apply this to the kinetically dominated universe, and compare with the more traditional approach. Science and Technology Facilities Council. This is the author accepted manuscript. The final version is available from the American Physical Society via http://dx.doi.org/10.1103/PhysRevD.94.02404
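    For reference, a minimal statement of the equation in question (standard textbook form, not the paper's new conditions): the Mukhanov-Sasaki mode functions v_k obey the equation below, and the "more traditional approach" referred to above is the Bunch-Davies choice of vacuum in the far past.

```latex
% Mukhanov-Sasaki equation in conformal time \tau, with z = a\dot{\phi}/H.
% The limit shown is the standard Bunch-Davies condition, i.e. the
% traditional approach the paper compares its new conditions against.
\[
  v_k'' + \left(k^2 - \frac{z''}{z}\right) v_k = 0,
  \qquad
  v_k(\tau) \to \frac{e^{-ik\tau}}{\sqrt{2k}} \quad (k\tau \to -\infty).
\]
```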

    Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2^N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model. AJKC acknowledges support from the Jet Propulsion Laboratory (JPL) Research and Technology Development programme. SH thanks the Science and Technology Facilities Council (STFC) for financial support. CJM acknowledges financial support provided under the European Union’s H2020 ERC Consolidator Grant ‘Matter and strong-field gravity: New frontiers in Einstein’s theory’ grant agreement no. MaGRaTh646597, and networking support by the COST Action CA16104. Parts of this work were performed using the Darwin Supercomputer of the University of Cambridge High Performance Computing Service (http://www.hpc.cam.ac.uk/), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England and funding from STFC. Parts of this work were also undertaken on the COSMOS Shared Memory system at DAMTP, University of Cambridge, operated on behalf of the STFC DiRAC HPC Facility; this equipment is funded by BIS National E-infrastructure capital grant ST/J005673/1 and STFC grants ST/H008586/1, ST/K00333X/1. Parts of this work were also carried out at JPL, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
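    The evidence-combination step described above turns the evidences of the 2^N submodels (each with some subset of the N deformation parameters switched on) into a single posterior odds ratio for modified gravity over relativity. A hedged sketch of one common convention, assuming equal prior odds among submodels; the log-evidence values are placeholders, not results from the paper.

```python
# Hedged sketch of combining submodel evidences into posterior odds for
# "modified gravity" versus the GR null model.  Equal prior odds across
# submodels are assumed; the numbers are placeholders, not paper results.
import numpy as np
from itertools import combinations
from scipy.special import logsumexp

N = 3                                        # number of deformation parameters
params = [f"delta_{i}" for i in range(N)]
submodels = [c for r in range(1, N + 1) for c in combinations(params, r)]  # 2^N - 1 non-null subsets

rng = np.random.default_rng(1)
log_Z_gr = -100.0                            # placeholder log-evidence of the null (GR) model
log_Z_sub = log_Z_gr + rng.normal(0.0, 1.0, size=len(submodels))           # placeholder submodel log-evidences

# Odds = (1 / (2^N - 1)) * sum_i Z_i / Z_GR, computed in log space for stability
log_odds = logsumexp(log_Z_sub) - np.log(len(submodels)) - log_Z_gr
print(f"{len(submodels)} submodels, log posterior odds (modGR vs GR): {log_odds:.2f}")
```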

    Constraining the dark energy equation of state using Bayes theorem and the Kullback–Leibler divergence

    Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era cosmic microwave background, baryonic acoustic oscillations (BAO), Type Ia supernova (SNIa) and Lyman α (Lyα) data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance Λ cold dark matter (ΛCDM) model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other, a supernegative equation of state (also known as ‘phantom dark energy’) is identified within the 1.5σ confidence intervals of the posterior distribution. To identify the power of different data sets in constraining the dark energy equation of state, we use a novel formulation of the Kullback–Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible data set combination. The SNIa and BAO data sets are shown to provide much more constraining power in comparison to the Lyα data sets. Further, SNIa and BAO constrain most strongly around redshift range 0.1–0.5, whilst the Lyα data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular data set, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data. This work was performed using the Darwin Supercomputer of the University of Cambridge High Performance Computing Service (http://www.hpc.cam.ac.uk), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England and funding from the Science and Technology Facilities Council (STFC). Parts of this work were undertaken on the COSMOS Shared Memory system at DAMTP, University of Cambridge, operated on behalf of the STFC DiRAC HPC Facility; this equipment is funded by BIS National E-infrastructure capital grant ST/J005673/1 and STFC grants ST/H008586/1, ST/K00333X/1. SH and WJH thank STFC for financial support.
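    The Kullback–Leibler divergence referred to above measures the information gained in updating from a prior π(θ) to a posterior P(θ|D), D_KL = ∫ P(θ|D) ln[P(θ|D)/π(θ)] dθ. A minimal sketch with a toy one-dimensional Gaussian prior/posterior pair (placeholder widths, not the paper's w(z) posteriors), estimated by Monte Carlo and checked against the closed-form Gaussian result.

```python
# Hedged sketch of the Kullback-Leibler divergence used to quantify the
# information gained in moving from prior to posterior.  The Gaussian
# prior/posterior pair is a placeholder, not the paper's w(z) posteriors.
import numpy as np
from scipy.stats import norm

prior = norm(loc=0.0, scale=5.0)        # broad prior (placeholder width)
post = norm(loc=-1.0, scale=0.5)        # tighter, data-driven posterior (placeholder)

# D_KL(post || prior) = E_post[ ln post(x) - ln prior(x) ], estimated from samples
samples = post.rvs(size=200_000, random_state=2)
dkl_mc = np.mean(post.logpdf(samples) - prior.logpdf(samples))

# Analytic Gaussian KL divergence for the cross-check
mu0, s0, mu1, s1 = prior.mean(), prior.std(), post.mean(), post.std()
dkl_exact = np.log(s0 / s1) + (s1**2 + (mu1 - mu0)**2) / (2 * s0**2) - 0.5

print(f"Monte Carlo D_KL = {dkl_mc:.3f} nats, analytic = {dkl_exact:.3f} nats")
```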

    Results of Combining Peculiar Velocity, CMB and Type Ia Supernova Cosmological Parameter Information

    We compare and combine likelihood functions of the cosmological parameters Ωm, h and σ8, from peculiar velocities, cosmic microwave background (CMB) and type Ia supernovae. These three data sets directly probe the mass in the Universe, without the need to relate the galaxy distribution to the underlying mass via a ‘biasing’ relation. We include the recent results from the CMB experiments BOOMERANG and MAXIMA-1. Our analysis assumes a flat Λ cold dark matter (ΛCDM) cosmology with a scale-invariant adiabatic initial power spectrum and a baryonic fraction as inferred from big-bang nucleosynthesis. We find that all three data sets agree well, overlapping significantly at the 2σ level. This therefore justifies a joint analysis, in which we find a joint best-fitting point with 95 per cent confidence limits of (0.17, 0.39) for Ωm, (0.64, 0.86) for h and (0.98, 1.37) for σ8, together with limits of (0.40, 0.73) and (0.16, 0.27) on the natural parameter combinations for these data. At the joint best-fitting point, the age of the Universe is 13.2 Gyr.
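    A minimal sketch of the combination step described above: because the three data sets are independent, their log-likelihoods add on a common parameter grid and the joint best fit is read off the summed surface. The Gaussian toy likelihoods below are placeholders, not the actual peculiar-velocity, CMB or SNIa constraints.

```python
# Hedged sketch: combine independent likelihoods by summing log-likelihoods
# on a common (Omega_m, h) grid.  The toy Gaussians are placeholders only.
import numpy as np

om = np.linspace(0.05, 0.6, 200)
h = np.linspace(0.5, 0.9, 200)
OM, H = np.meshgrid(om, h, indexing="ij")

def gauss2d(x, y, mx, my, sx, sy):
    """Toy Gaussian log-likelihood in two parameters."""
    return -0.5 * (((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2)

lnL_pv = gauss2d(OM, H, 0.30, 0.70, 0.15, 0.20)   # placeholder "velocities"
lnL_cmb = gauss2d(OM, H, 0.25, 0.75, 0.10, 0.08)  # placeholder "CMB"
lnL_sn = gauss2d(OM, H, 0.28, 0.72, 0.12, 0.15)   # placeholder "SNIa"

lnL_joint = lnL_pv + lnL_cmb + lnL_sn             # independent data sets: logs add
i, j = np.unravel_index(np.argmax(lnL_joint), lnL_joint.shape)
print(f"joint best fit: Omega_m = {OM[i, j]:.2f}, h = {H[i, j]:.2f}")
```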